Net Promoter Score® Discussion at Customer Loyalty Forum
Great Brook recently kicked off the Customer Loyalty Forum in the Boston area. On March 10, 2010 we held our first breakfast meeting, with over 30 people from Boston area companies discussing the merits of the Net Promoter® Score and other issues in capturing and applying customer feedback. It was a very lively discussion: the group agreed on many ideas, but practices differed across companies. Here are my notes from the meeting, arranged by topic. I have omitted company names for confidentiality reasons.
First of all, the legal covenants… Net Promoter®, NPS®, and Net Promoter Score® are trademarks of Satmetrix Systems, Inc., Bain & Company, and Fred Reichheld.
Background: What is Net Promoter Score® (NPS®)? In his article and his book, Reichheld advocates asking the “Ultimate Question” on surveys: How likely are you to recommend our product or service to friends or colleagues? His research indicates that responses to this question are the best single indicator of long-term profitability. Respondents are classified into three groups: Promoters score the question a 9 or 10, Passives a 7 or 8, and Detractors a 0 to 6. (Yes, it’s an 11-point, 0-to-10 scale.) NPS® is calculated by subtracting the percentage of Detractors from the percentage of Promoters.
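For readers who like to see the arithmetic spelled out, here is a minimal sketch in Python of the calculation just described. The function name and the sample responses are purely illustrative, not from any company in the group:

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 'likelihood to recommend' responses.

    Promoters score 9-10, Passives 7-8, Detractors 0-6.
    Returns a value in percentage points, from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Example: 5 Promoters, 3 Passives, 2 Detractors out of 10 responses,
# so NPS = 50% - 20% = 30.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 7, 5, 3]))  # 30.0
```

Note that Passives drop out of the numerator entirely; they only dilute the denominator.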
Follow the link for my own review of Reichheld’s HBR article. I also just came across a 2007 article in the Journal of Marketing where the authors attempt to replicate Reichheld’s Net Promoter® study. (Keiningham, Timothy L. et al., “A Longitudinal Examination of Net Promoter and Firm Revenue Growth,” Journal of Marketing, Vol. 71, July 2007, pp. 39–51.)
11-point Scales Are Not Essential. One participant uses a 5-point scale, others use a 10-point scale, and one uses the 11-point scale. Two issues were noted. First, if you change scales, as one participant did, you essentially have to start your trend lines anew; converting findings from 10-point scales to 5-point scales is difficult. Second, cross-company comparisons are only really valid when both companies use the same scale. This person added that the only benchmark that really matters is the benchmark against your own company’s past results. One attendee mentioned a study that examined the impact of different scale designs. Caution: this is a 94-page academic study. (Note: The link I had to the study is now a dead link.)
Make Sure the Survey Questions are Relevant. One participant discussed how the surveying and NPS® craze at his company had proved counterproductive. They were conducting transactional surveys for technical support events, and others in the company kept adding questions that were not relevant to the support transaction. (Another attendee, whose company now only does transactional surveys, called this practice a “relatiotransactional survey.”) With the change to a simpler survey, the response rate stayed around 40%, but the completion rate once a respondent had started a survey jumped from 90% to 98%.
Make Sure the Summary Question is Logical. The above person said that respondents saw the overall recommendation and loyalty questions as illogical. They would see comments on the survey such as, “Why are you asking me about my loyalty? I have no choice but to call you when I need tech support.” Reichheld recommends using the classic recommendation question on relationship surveys – those broader-scope surveys about the overall relationship. But many companies use it on transactional surveys.
Make Sure the Summary Question is Not Ambiguous. When net scoring is used on a transactional survey, are respondents answering based on the immediate transaction or on the broader relationship? The question is ambiguous. Additionally, some people in a business-to-business setting are not allowed to make recommendations, so one attendee makes the question conditional: “Assuming you were allowed to make recommendations, how likely…”
Increasing Response Rates. One person with a very small customer population asked how to increase response rates, which led to a broad discussion. Some use incentives, e.g., a monthly raffle of a $25 Amazon certificate. Another commented that they used to give company product as a reward, but that proved useless; a $5 Starbucks card or something small that respondents could bring home to their kids was a more effective reward. But are the data any good? One person noted that if it takes an incentive to get a survey completed, doesn’t that show the respondent isn’t really engaged? After the session a participant commented that rewards cannot be accepted
Reminders were used, but those who survey the same group repeatedly avoid sending multiple reminders and observe a no-repeat-survey window of 30 to 90 days.
Two people commented on how providing feedback to customers affects response rates. One summarizes the learning from previous surveys in the cover note for subsequent surveys to show they really listen and take action. Another takes customers on factory tours where a display board presents recent survey results. This helps sell customers on the value of responding.
Net Scores Will Differ from Transactional to Relationship Surveys. One attendee stated that due to the “personal nature of a transactional survey, respondents are far more gracious in their replies.” Scores tend to be lower on the relationship surveys because they incorporate things beyond the immediate interaction. He felt that relationship surveys are a better measure of where you truly stand with the customer than are transactional surveys. He stated, “Relationship NPS® is top-down; transactional NPS® is bottom-up.” A colleague of his added that they view the transactional survey as a “predictor of what’s happening a few quarters later in the relationship surveys.” Since the people taking the transactional surveys are likely to grow in the management ranks over the years, this firm tries to address any issues with that customer there and then.
Full Anchoring Leads to Fewer Comments. Scales with descriptors – anchors in surveying lingo – for every scale point tend to garner fewer comments. The assumption is that respondents feel they have told you how they feel once they’ve selected a response with descriptive words. So, if you value comments – and most people do – then scales with anchors for the endpoints only may lead to more comments.
Satisfied People Don’t Comment. One company uses a 5-point scale with descriptive anchors for each scale point where 4 equals Satisfied. They have found that respondents will write comments for 5s, 3s, 2s, and 1s – though they get few very low scores. But they get no comments for 4s. Respondents apparently don’t see a need to explain why they’re satisfied. That makes sense!
The Analytical Method Affects Interpretability and Sense of Urgency. The goal of statistical analysis of any data set is to summarize all the messy raw data into a few understandable statistics. One attendee’s company uses the mean score as the key statistic. Others use “top box scoring,” that is, the percent of respondents who gave scores in the top response categories on the scale. Another says his company has “taken to netting everything.” They use the NPS® logic but with a 5-point scale, subtracting the percent of 1s and 2s from the percent of 5s. “Net score tends to amplify differences.” Another person added that the net score framework provides a better way to talk to their employees about customer interactions. Saying “we want to create a 5” doesn’t have the same impact as changing a net score. Interpretability helps implementation of the cultural change.
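To see why netting amplifies differences, here is a small Python sketch comparing the three summaries on the same data. The two response distributions are invented for illustration, and the 5-point net score follows the rule described above (percent of 5s minus percent of 1s and 2s):

```python
from collections import Counter

def summarize(scores):
    """Return (mean, top-box %, net score) for 1-5 scale responses."""
    n = len(scores)
    counts = Counter(scores)
    mean = sum(scores) / n
    top_box = 100.0 * counts[5] / n                       # percent of 5s
    net = 100.0 * (counts[5] - counts[1] - counts[2]) / n # 5s minus 1s and 2s
    return mean, top_box, net

# Two invented distributions of 100 responses each: the second gains
# five Promoters (5s) and loses five low scores (2s and 1s).
before = [5]*30 + [4]*40 + [3]*15 + [2]*10 + [1]*5
after  = [5]*35 + [4]*40 + [3]*15 + [2]*7  + [1]*3

for label, data in [("before", before), ("after", after)]:
    mean, top_box, net = summarize(data)
    print(f"{label}: mean={mean:.2f}  top box={top_box:.0f}%  net={net:.0f}")
```

In this toy example the mean moves from 3.80 to 3.97 and the top box from 30% to 35%, but the net score jumps from 15 to 25, because gains at the top and losses at the bottom both count toward it. That double-counting of movement is the amplification the attendee described.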
The System, Not the Metric, is Key. Perhaps the most important lesson presented was that running the survey program and calculating statistics is the easy part. To get action taken requires a “culturalization” process. The metric alone does nothing. It must be implemented and “socialized” within the management system. One company said they went through a massive “amount of effort to market the metric to the company.” Managers are now hearing feedback and are expected to take action on it. Companies tend to underestimate the change management requirements of implementing a feedback-oriented organizational culture.
Transactional Surveys Drive Action. The company that moved strictly to transactional surveys in a very comprehensive program said the key advantage was making the findings more personal, creating an incentive to take action. Now “people couldn’t say, ‘it wasn’t really about me.’”
Net Scores Aren’t Directly Actionable. Many people emphasized, in agreement with Reichheld, that survey scores are abstract and don’t point to what action needs to be taken. “If the metric is not actionable, then it’s useless.” No rating scale score will ever really be actionable on its own. This person asked whether “we should have a fairly open conversation with customers to get to the heart of the issues.” The lesson is that a survey can’t be the only tool in the arsenal. Comments on a survey can help. One person said that they developed action plans for one manager from the survey comments. In 90 days that manager’s net score increased by 8%. That got others looking at the value of the survey results.
But… Do Net Scores Truly Reflect Behavioral Changes? The bottom-line question is: does NPS® indicate a change in customer purchasing behavior? Reichheld’s study indicates it does. None of the participants said they could attest to seeing a cause-and-effect relationship. One said that longer-standing customers had higher net scores, but that only makes sense: unhappy customers would leave. Does that prove causation? No one could say they’ve seen a change in buying patterns as a customer’s net score changed.
However, knowing a customer’s net score could be valuable for marketing reasons. One firm noted that the net scores identified customers who could be used for promotional outreach as reference accounts.
Yes, we covered a lot of ground in 90 minutes! Discussion to continue…